Sirenomelia, the mermaid syndrome: a rare, invariably fatal congenital anomaly in a term unsupervised pregnancy
Sirenomelia is a rare congenital anomaly with an incidence of 0.8 to 1 case per 100,000 births. The prognosis is grim due to associated genitourinary and gastrointestinal anomalies. Antenatal registration in the first trimester and a timely ultrasound go a long way toward detecting the anomaly while termination can still be offered, sparing the mother the mental agony of giving birth to a term neonate with a fatal congenital anomaly.
CL-MAE: Curriculum-Learned Masked Autoencoders
Masked image modeling has been demonstrated as a powerful pretext task for
generating robust representations that can be effectively generalized across
multiple downstream tasks. Typically, this approach involves randomly masking
patches (tokens) in input images, with the masking strategy remaining unchanged
during training. In this paper, we propose a curriculum learning approach that
updates the masking strategy to continually increase the complexity of the
self-supervised reconstruction task. We conjecture that, by gradually
increasing the task complexity, the model can learn more sophisticated and
transferable representations. To facilitate this, we introduce a novel
learnable masking module that possesses the capability to generate masks of
different complexities, and integrate the proposed module into masked
autoencoders (MAE). Our module is jointly trained with the MAE, while adjusting
its behavior during training, transitioning from a partner to the MAE
(optimizing the same reconstruction loss) to an adversary (optimizing the
opposite loss), while passing through a neutral state. The transition between
these behaviors is smooth, being regulated by a factor that is multiplied with
the reconstruction loss of the masking module. The resulting training procedure
generates an easy-to-hard curriculum. We train our Curriculum-Learned Masked
Autoencoder (CL-MAE) on ImageNet and show that it exhibits superior
representation learning capabilities compared to MAE. The empirical results on
five downstream tasks confirm our conjecture, demonstrating that curriculum
learning can be successfully used to self-supervise masked autoencoders.
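The partner-to-adversary transition above is governed by a single factor multiplied with the masking module's reconstruction loss. A minimal sketch of how such a factor could be scheduled is below; the cosine schedule and function names are illustrative assumptions, as the abstract does not specify the exact schedule.

```python
import math

def curriculum_factor(step: int, total_steps: int) -> float:
    """Smooth factor multiplied with the masking module's reconstruction
    loss: +1 (partner) at the start, 0 (neutral) midway, -1 (adversary)
    at the end. A cosine schedule is one smooth choice."""
    return math.cos(math.pi * step / total_steps)

def masking_module_loss(reconstruction_loss: float,
                        step: int, total_steps: int) -> float:
    # A positive factor makes the module cooperate (same loss as the MAE);
    # a negative factor makes it adversarial (optimizing the opposite loss),
    # yielding an easy-to-hard curriculum of generated masks.
    return curriculum_factor(step, total_steps) * reconstruction_loss
```

Early in training the module is rewarded for masks the MAE reconstructs easily; late in training it is rewarded for masks the MAE struggles with.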
Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection
Anomaly detection is commonly pursued as a one-class classification problem,
where models can only learn from normal training samples, while being evaluated
on both normal and abnormal test samples. Among the successful approaches for
anomaly detection, a distinguished category of methods relies on predicting
masked information (e.g. patches, future frames, etc.) and leveraging the
reconstruction error with respect to the masked information as an abnormality
score. Different from related methods, we propose to integrate the
reconstruction-based functionality into a novel self-supervised predictive
architectural building block. The proposed self-supervised block is generic and
can easily be incorporated into various state-of-the-art anomaly detection
methods. Our block starts with a convolutional layer with dilated filters,
where the center area of the receptive field is masked. The resulting
activation maps are passed through a channel attention module. Our block is
equipped with a loss that minimizes the reconstruction error with respect to
the masked area in the receptive field. We demonstrate the generality of our
block by integrating it into several state-of-the-art frameworks for anomaly
detection on image and video, providing empirical evidence that shows
considerable performance improvements on MVTec AD, Avenue, and ShanghaiTech. We
release our code as open source at https://github.com/ristea/sspcab. (Accepted at CVPR 2022; paper + supplementary, 14 pages, 9 figures.)
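The core idea of the block, predicting a masked centre from dilated surroundings and using the reconstruction error as an abnormality score, can be sketched with a toy numpy stand-in. Averaging four neighbours at a fixed dilation is an illustrative assumption, not the paper's learned convolutional kernels or attention module.

```python
import numpy as np

def masked_dilated_prediction(x: np.ndarray, d: int = 2) -> np.ndarray:
    """Predict each position from four dilated neighbours at offset d,
    with the centre of the receptive field masked out (a toy stand-in
    for the learned masked dilated convolution)."""
    up    = np.roll(x,  d, axis=0)
    down  = np.roll(x, -d, axis=0)
    left  = np.roll(x,  d, axis=1)
    right = np.roll(x, -d, axis=1)
    return (up + down + left + right) / 4.0

def abnormality_score(x: np.ndarray, d: int = 2) -> float:
    """Mean reconstruction error w.r.t. the masked centre: low on
    predictable (normal) inputs, high on anomalous ones."""
    return float(np.mean((x - masked_dilated_prediction(x, d)) ** 2))
```

A smooth, self-similar input yields a near-zero score, while a local anomaly (e.g. a bright spot) the neighbours cannot explain raises it.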
New Metric for Evaluation of Deep Neural Network Applied in Vision-Based Systems
Vision-based object detection plays a crucial role in the complete functionality of many engineering systems. Typically, detectors or classifiers are used to detect objects or to distinguish between targets. This contribution presents a new evaluation of CNN classifiers in image detection using a modified Probability of Detection (POD) reliability measure. The proposed method allows the evaluation of further image parameters affecting the classification results. The evaluation is implemented on images, and the parameters with the best detection capability are compared. A typical certification standard (90/95), denoting a 90% probability of detection at a 95% reliability level, is adapted and successfully applied. Using the 90/95 standard, comparisons are made between different image parameters. A noise analysis procedure is introduced, permitting a trade-off between the detection rate, false alarms, and process parameters. The advantage of the novel approach is experimentally evaluated on vision-based CNN classification results under different image parameters. With this new POD evaluation, classifiers can become a trustworthy part of vision systems.
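The 90/95 criterion above can be checked, for example, with a one-sided lower confidence bound on the observed detection rate. The Wilson score bound used in this sketch is an assumption for illustration; the abstract does not specify the paper's exact statistical procedure.

```python
import math

def wilson_lower_bound(successes: int, trials: int, z: float = 1.645) -> float:
    """One-sided lower confidence bound on a detection probability
    (Wilson score interval; z = 1.645 gives 95% one-sided confidence)."""
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z * z / trials
    centre = p + z * z / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials
                           + z * z / (4 * trials * trials))
    return (centre - margin) / denom

def meets_90_95(successes: int, trials: int) -> bool:
    """90/95 criterion: at least 90% probability of detection
    demonstrated at the 95% confidence level."""
    return wilson_lower_bound(successes, trials) >= 0.90
```

Note that observing exactly 90% detections in a finite sample does not pass: the lower confidence bound falls below 0.90, so a higher empirical rate (or more trials) is needed.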
Self-supervised masked convolutional transformer block for anomaly detection
Anomaly detection has recently gained increasing attention in the field of computer vision, likely due to its broad set of applications, ranging from product fault detection on industrial production lines and impending event detection in video surveillance to finding lesions in medical scans. Regardless of the domain, anomaly detection is typically framed as a one-class classification task, where learning is conducted on normal examples only. An entire family of successful anomaly detection methods is based on learning to reconstruct masked normal inputs (e.g. patches, future frames, etc.) and using the magnitude of the reconstruction error as an indicator of the abnormality level. Unlike other reconstruction-based methods, we present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level. The proposed self-supervised block is extremely flexible, enabling information masking at any layer of a neural network and being compatible with a wide range of neural architectures. In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on the Huber loss. Furthermore, we show that our block is applicable to a wider variety of tasks, adding anomaly detection in medical images and thermal videos to the previously considered tasks based on RGB images and surveillance videos. We exhibit the generality and flexibility of SSMCTB by integrating it into multiple state-of-the-art neural models for anomaly detection, bringing forth empirical results that confirm considerable performance improvements on five benchmarks.
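The Huber-based self-supervised objective mentioned above can be sketched as follows: quadratic for small reconstruction residuals, linear for large ones, which makes the objective less sensitive to outlier pixels than plain MSE. Averaging it over the masked region as the abnormality score is an illustrative choice consistent with the abstract; the delta threshold and function names are assumptions.

```python
import numpy as np

def huber(residual: np.ndarray, delta: float = 1.0) -> np.ndarray:
    """Elementwise Huber loss: 0.5*r^2 for |r| <= delta,
    delta*(|r| - 0.5*delta) otherwise."""
    abs_r = np.abs(residual)
    quadratic = 0.5 * residual ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return np.where(abs_r <= delta, quadratic, linear)

def abnormality_score(masked_target: np.ndarray,
                      reconstruction: np.ndarray,
                      delta: float = 1.0) -> float:
    """Mean Huber reconstruction error over the masked region,
    used as the abnormality indicator."""
    return float(np.mean(huber(masked_target - reconstruction, delta)))
```

With delta = 1, a residual of 0.5 contributes 0.125 (quadratic regime) while a residual of 2.0 contributes only 1.5 (linear regime), rather than the 2.0 that MSE's 0.5*r^2 would give.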